[1] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J].
Communications of the ACM, 2017, 60(6): 84-90.
[2] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//2016 IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR). IEEE, 2016: 770-778.
[3] 陈良臣, 傅德印. 面向小样本数据的机器学习方法研究综述[J]. 计算机工程, 2022, 48(11): 1-13. CHEN L C, FU D Y. Survey
on machine learning methods for small sample data[J]. Computer Engineering, 2022, 48(11): 1-13.
[4] LI X, SUN Q, LIU Y, et al. Learning to self-train for semi-supervised few-shot classification[J]. Advances in neural information
processing systems, 2019, 32:10276-10286.
[5] AFRASIYABI A, LALONDE J F, GAGNÉ C. Associative alignment for few-shot image classification[C]//Computer
Vision–ECCV 2020: 16th European Conference. Glasgow, UK: Springer International Publishing, 2020: 18-35.
[6] LI J, WANG Z, HU X. Learning intact features by erasing-inpainting for few-shot classification[C]//Proceedings of the AAAI
conference on artificial intelligence. Palo Alto, CA: AAAI Press, 2021, 35(9): 8401-8409.
[7] XU J, LE H. Generating representative samples for few-shot classification[C]//2022 IEEE/CVF Conference on Computer Vision
and Pattern Recognition (CVPR). IEEE, 2022: 9003-9013.
[8] KINGMA D P, WELLING M. Auto-encoding variational Bayes[J]. arXiv preprint arXiv:1312.6114, 2013.
https://doi.org/10.48550/arXiv.1312.6114.
[9] LI K, ZHANG Y, LI K, et al. Adversarial feature hallucination networks for few-shot learning[C]//2020 IEEE/CVF Conference
on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020: 13470-13479.
[10] GAO H, SHOU Z, ZAREIAN A, et al. Low-shot learning via covariance-preserving adversarial augmentation networks[J]. Advances in neural information processing systems, 2018, 31:983-993.
[11] HARIHARAN B, GIRSHICK R. Low-shot visual recognition by shrinking and hallucinating features[C]//2017 IEEE
International Conference on Computer Vision (ICCV). IEEE, 2017: 3037-3046.
[12] SCHWARTZ E, KARLINSKY L, SHTOK J, et al. Delta-encoder: an effective sample synthesis method for few-shot object
recognition[J]. Advances in neural information processing systems, 2018, 31:2850-2860.
[13] XU J, LE H, HUANG M, et al. Variational feature disentangling for fine-grained few-shot classification[C]// 2021 IEEE/CVF
Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021: 8812-8821.
[14] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[J]. Advances in neural information
processing systems, 2016, 29:3630-3638.
[15] REN M, TRIANTAFILLOU E, RAVI S, et al. Meta-learning for semi-supervised few-shot classification[J]. arXiv preprint
arXiv:1803.00676, 2018. https://doi.org/10.48550/arXiv.1803.00676.
[16] WAH C, BRANSON S, WELINDER P, et al. The Caltech-UCSD Birds-200-2011 dataset[R]. Pasadena, CA: California Institute of Technology, 2011.
[17] HE K, FAN H, WU Y, et al. Momentum contrast for unsupervised visual representation learning[C]//2020 IEEE/CVF Conference
on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020: 9729-9738.
[18] OORD A, LI Y, VINYALS O. Representation learning with contrastive predictive coding[J]. arXiv preprint arXiv:1807.03748,
2018. https://doi.org/10.48550/arXiv.1807.03748.
[19] LIN X, DUAN Y, DONG Q, et al. Deep variational metric learning[C]//Computer Vision–ECCV 2018: 15th European
Conference. Munich, Germany: Springer International Publishing, 2018: 714-729.
[20] YIN X, YU X, SOHN K, et al. Feature transfer learning for deep face recognition with under-represented data[J]. arXiv preprint
arXiv:1803.09014, 2018. https://doi.org/10.48550/arXiv.1803.09014.
[21] DENG J, DONG W, SOCHER R, et al. ImageNet: A large-scale hierarchical image database[C]//2009 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR). IEEE, 2009: 248-255.
[22] RAVI S, LAROCHELLE H. Optimization as a model for few-shot learning[C]//5th International Conference on Learning
Representations (ICLR). Toulon, France: OpenReview.net, 2017.
[23] CHEN T, KORNBLITH S, NOROUZI M, et al. A simple framework for contrastive learning of visual
representations[C]//International conference on machine learning. PMLR, 2020: 1597-1607.
[24] SNELL J, SWERSKY K, ZEMEL R. Prototypical networks for few-shot learning[J]. Advances in neural information processing
systems, 2017, 30:4077-4087.
[25] SUNG F, YANG Y, ZHANG L, et al. Learning to compare: Relation network for few-shot learning[C]//2018 IEEE/CVF
Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018: 1199-1208.
[26] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//International conference
on machine learning. PMLR, 2017: 1126-1135.
[27] ZHANG M, HUANG S, WANG D. Domain generalized few-shot image classification via meta regularization
network[C]//ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE,
2022: 3748-3752.
[28] RUSU A A, RAO D, SYGNOWSKI J, et al. Meta-learning with latent embedding optimization[J]. arXiv preprint
arXiv:1807.05960, 2018. https://doi.org/10.48550/arXiv.1807.05960.
[29] ORESHKIN B N, RODRIGUEZ P, LACOSTE A. TADAM: task dependent adaptive metric for improved few-shot learning[J].
Advances in neural information processing systems, 2018, 31:719-729.
[30] JIA J, FENG X, YU H. Few-shot classification via efficient meta-learning with hybrid optimization[J]. Engineering Applications of
Artificial Intelligence, 2024, 127: 107296.
[31] CHEN J, HU Y, SHEN M, et al. Dual episodic sampling and momentum consistency regularization for unsupervised few-shot
Learning[C]//2023 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2023: 2891-2896.
[32] ZHANG L, ZHOU F, WEI W, et al. Meta-hallucinating prototype for few-shot learning promotion[J]. Pattern Recognition, 2023, 136:
109235.
[33] LEE K, MAJI S, RAVICHANDRAN A, et al. Meta-learning with differentiable convex optimization[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019: 10657-10665.
[34] SUN Q, LIU Y, CHUA T S, et al. Meta-transfer learning for few-shot learning[C]//2019 IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR). IEEE, 2019: 403-412.